Unbiased Black-Box Complexities of Jump Functions
We analyze the unbiased black-box complexity of jump functions with small,
medium, and large sizes of the fitness plateau surrounding the optimal
solution.
Among other results, we show that when the jump size is , that is, only a small constant fraction of the fitness values
is visible, then the unbiased black-box complexities for arities and higher
are of the same order as those for the simple OneMax function. Even
for the extreme jump function, in which all but the two fitness values
and are blanked out, polynomial-time mutation-based (i.e., unary unbiased)
black-box optimization algorithms exist. This is quite surprising given that
for the extreme jump function almost the whole search space (all but a
fraction) is a plateau of constant fitness.
To prove these results, we introduce new tools for the analysis of unbiased
black-box complexities, for example, selecting the new parent individual not
only by comparing the fitnesses of the competing search points, but also by
taking into account the (empirical) expected fitnesses of their offspring.
Comment: This paper is based on results presented in the conference versions
[GECCO 2011] and [GECCO 2014].
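For intuition, the setting above can be made concrete with a small sketch. The following is our own illustration (assuming the standard textbook definition of Jump_k, which may differ in detail from the plateau variants analyzed in the paper) of a unary unbiased black-box algorithm, the (1+1) EA with standard bit mutation, optimizing a jump function; all function names are ours.

```python
import random

def jump(x, k):
    # Standard textbook Jump_k: OneMax shifted by k, with a fitness "gap"
    # of width k just below the optimum (an assumption; the paper studies
    # several plateau variants).
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the gap

def one_plus_one_ea(n, k, rng, max_iters=100_000):
    # A unary unbiased black-box algorithm: standard bit mutation with
    # elitist selection based only on the observed fitness values.
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = jump(x, k)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1 / n) for b in x]  # flip each bit w.p. 1/n
        fy = jump(y, k)
        if fy >= fx:
            x, fx = y, fy
        if fx == n + k:  # optimum reached
            return t
    return None

rng = random.Random(1)
iters = one_plus_one_ea(n=12, k=2, rng=rng)
```

To cross the gap, this mutation-only algorithm must flip the k remaining zero-bits in one step, which is the source of its runtime blow-up for larger jump sizes and the reason higher-arity unbiased operators help.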
Bounding Bloat in Genetic Programming
While many optimization problems work with a fixed number of decision
variables and thus a fixed-length representation of possible solutions, genetic
programming (GP) works on variable-length representations. A naturally
occurring problem is that of bloat (unnecessary growth of solutions) slowing
down optimization. Theoretical analyses could so far not bound bloat and
required explicit assumptions on the magnitude of bloat. In this paper we
analyze bloat in mutation-based genetic programming for the two test functions
ORDER and MAJORITY. We overcome previous assumptions on the magnitude of bloat
and give matching or close-to-matching upper and lower bounds for the expected
optimization time. In particular, we show that the (1+1) GP takes (i)
iterations with bloat control on ORDER as well as
MAJORITY; and (ii) and
(and for )
iterations without bloat control on MAJORITY.
Comment: An extended abstract has been published at GECCO 201
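The two test functions can be sketched concretely. The code below is our own illustration, assuming the common encoding of a GP individual as the in-order sequence of its literal leaves (the tree-based setup reduces to this for fitness evaluation); the helper names are illustrative.

```python
from collections import Counter

def order_fitness(leaves):
    # ORDER: variable i is "expressed" iff the positive literal +i occurs
    # before the negated literal -i in the leaf sequence.
    # Encoding (our assumption): leaves are nonzero ints, +i for x_i,
    # -i for the negation of x_i.
    seen, expressed = set(), 0
    for lit in leaves:
        v = abs(lit)
        if v not in seen:
            seen.add(v)
            if lit > 0:
                expressed += 1
    return expressed

def majority_fitness(leaves, n):
    # MAJORITY: variable i is expressed iff x_i occurs at least once and
    # at least as often as its negation.
    c = Counter(leaves)
    return sum(1 for i in range(1, n + 1) if c[i] > 0 and c[i] >= c[-i])

# Bloat in miniature: both individuals have equal fitness, but the second
# carries redundant leaves that contribute nothing.
lean = [1, 2]
bloated = [1, -1, 1, 2, 2, -2, 2]
```

Because redundant leaves never change the fitness, selection alone cannot remove them, which is why the growth of such material has to be bounded by a separate argument.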
Intuitive Analyses via Drift Theory
Humans are bad with probabilities, and the analysis of randomized algorithms
offers many pitfalls for the human mind. Drift theory is an intuitive tool for
reasoning about random processes. It allows turning expected stepwise changes
into expected first-hitting times. While drift theory is used extensively by
the community studying randomized search heuristics, it has seen hardly any
applications outside of this field, in spite of many research questions which
can be formulated as first-hitting times.
We state the most useful drift theorems and demonstrate their use for various
randomized processes, including approximating vertex cover, the coupon
collector process, a random sorting algorithm, and the Moran process. Finally,
we consider processes without expected stepwise change and give a lemma based
on drift theory applicable in such scenarios without drift. We use this tool
for the analysis of the gambler's ruin process, for a coloring algorithm, for
an algorithm for 2-SAT, and for a version of the Moran process without bias.
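The coupon collector example can be worked through concretely. The sketch below (our own illustration, not code from the paper) simulates the process and compares the empirical mean with the bound from the multiplicative drift theorem: if E[X_t − X_{t+1} | X_t] ≥ δ·X_t, then E[T] ≤ (1 + ln X_0)/δ.

```python
import math
import random

def coupon_collector(n, rng):
    # Draw uniform coupons until all n types have been seen; return #draws.
    seen, draws = set(), 0
    while len(seen) < n:
        draws += 1
        seen.add(rng.randrange(n))
    return draws

def drift_bound(n):
    # Multiplicative drift: with X_t = number of missing coupons, each draw
    # hits a missing type with probability X_t / n, so
    # E[X_t - X_{t+1} | X_t] = X_t / n, i.e. delta = 1/n and X_0 = n.
    # The theorem gives E[T] <= (1 + ln X_0) / delta = n * (1 + ln n).
    return n * (1 + math.log(n))

rng = random.Random(0)
n = 50
avg = sum(coupon_collector(n, rng) for _ in range(200)) / 200
```

For n = 50 the exact expectation n·H_n ≈ 225 sits below the drift bound ≈ 246: the theorem trades a small constant factor for a one-line proof.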
Theoretical Study of Optimizing Rugged Landscapes with the cGA
Estimation of distribution algorithms (EDAs) provide a distribution-based
approach for optimization which adapts its probability distribution during the
run of the algorithm. We contribute to the theoretical understanding of EDAs
and point out that their distribution approach makes them more suitable to deal
with rugged fitness landscapes than classical local search algorithms.
Concretely, we make the OneMax function rugged by adding noise to each fitness
value. The cGA can nevertheless find solutions with n(1 - ε) many 1s,
even for high variance of noise. In contrast to this, RLS and the (1+1) EA,
with high probability, only find solutions with n(1/2+o(1)) many 1s, even for
noise with small variance.
Comment: 17 pages, 1 figure, PPSN 202
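The distribution-based approach can be sketched in a few lines. The following is a minimal cGA on a noisy OneMax, under our own modeling assumptions (additive Gaussian noise re-drawn on every evaluation; the paper's exact noise model and parameter choices may differ).

```python
import random

def noisy_onemax(x, sigma, rng):
    # OneMax plus centered Gaussian noise, drawn fresh on every
    # evaluation -- our stand-in for the rugged landscape.
    return sum(x) + rng.gauss(0, sigma)

def cga(n, K, sigma, rng, max_iters=50_000):
    # Compact GA: a frequency vector replaces the population.  Sample two
    # offspring, move each differing frequency by 1/K toward the (noisy)
    # winner, and keep all frequencies inside [1/n, 1 - 1/n].
    p = [0.5] * n
    for _ in range(max_iters):
        x = [int(rng.random() < pi) for pi in p]
        y = [int(rng.random() < pi) for pi in p]
        if noisy_onemax(x, sigma, rng) < noisy_onemax(y, sigma, rng):
            x, y = y, x  # make x the winner
        for i in range(n):
            if x[i] != y[i]:
                step = 1 / K if x[i] == 1 else -1 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
        if all(pi >= 1 - 1 / n for pi in p):  # essentially converged
            break
    return p

rng = random.Random(3)
p = cga(n=20, K=40, sigma=1.0, rng=rng)
```

Because each noisy comparison shifts the frequencies by only 1/K, individual misleading evaluations are averaged out over many steps; a local search algorithm, by contrast, commits fully to each (possibly noise-corrupted) accept/reject decision.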
Run Time Bounds for Integer-Valued OneMax Functions
While most theoretical run time analyses of discrete randomized search
heuristics focused on finite search spaces, we consider the search space
. This is a further generalization of the search space of
multi-valued decision variables .
We consider as fitness functions the distance to the (unique) non-zero
optimum (based on the -metric), and as algorithm the (1+1) EA, which mutates
by applying a step operator on each component that is determined to be
varied. For changing
by , we show that the expected optimization time is . In particular, the time is linear in the
maximum value of the optimum . Employing a different step operator which
chooses a step size from a distribution so heavy-tailed that the expectation is
infinite, we get an optimization time of .
Furthermore, we show that RLS with step size adaptation achieves an
optimization time of .
We conclude with an empirical analysis, comparing the above algorithms also
with a variant of CMA-ES for discrete search spaces.
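The heavy-tailed step operator can be sketched as follows (an illustration under our own assumptions, not the paper's exact operator): sampling a step size J with Pr(J ≥ j) ≈ 1/j yields Pr(J = j) ~ 1/j², so the expected step size diverges, yet most steps remain small.

```python
import random

def heavy_tailed_step(rng, cap=10**9):
    # Inversion sampling: for u uniform in (0, 1], J = floor(1/u) gives
    # Pr(J >= j) ~ 1/j, hence Pr(J = j) ~ 1/j^2 and an infinite mean.
    u = 1.0 - rng.random()  # in (0, 1]
    return min(cap, int(1.0 / u))

def rls_heavy_tailed(target, rng, max_iters=500_000):
    # RLS on the l1-distance to an integer target vector: pick one
    # component uniformly, move it by a heavy-tailed step in a random
    # direction, and accept iff the distance does not increase.
    # (Our sketch; the paper's step operators may differ in detail.)
    n = len(target)
    x = [0] * n
    dist = sum(abs(t) for t in target)
    for it in range(1, max_iters + 1):
        i = rng.randrange(n)
        step = heavy_tailed_step(rng) * rng.choice((-1, 1))
        old, new = abs(x[i] - target[i]), abs(x[i] + step - target[i])
        if new <= old:
            x[i] += step
            dist += new - old
        if dist == 0:
            return it
    return None

rng = random.Random(7)
iters = rls_heavy_tailed([5, -3, 1000, 42], rng)
```

The occasional very large step lets the search bridge long distances to far-away optimum components without any parameter tuning, while the mass on small steps still allows the final exact hits.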